Introduction to Adaptive Trial Designs

Jay Park

Core Clinical Sciences & McMaster University

March 5, 2024

Learning objectives

  1. Discuss the challenges of conventional approaches to clinical research

\(~\)

  2. Identify the concept and key principles of adaptive trial designs

\(~\)

  3. Discuss existing standards and reporting guidelines of adaptive trial designs and their practical considerations

Conventional approach to clinical trial research

Conventional trial designs (fixed sample trial designs)

  • General steps follow: Design, Conduct, and then Analysis

\(~\)

  • At the Design stage, we perform a power or a sample size calculation to determine or to justify a target sample size (or number of events)

\(~\)

  • During the Conduct stage, we enroll and follow up patients according to protocol

\(~\)

  • The Analysis stage comes once we finish with the last patient follow-up.

    • In conventional trial designs, we conduct a single analysis at the end

Sample size and statistical power

  • Regardless of the design, determination of the sample size and statistical power is a fundamental step in clinical trial research

\(~\)

  • Trials should be planned with a sufficiently large sample size that provides high statistical power to detect a clinically important treatment effect

\(~\)

  • Informally, statistical power refers to the probability of detecting an effect, if there is a true effect
    • Power = 1 - type II error

Overview of sample size and power calculation

To determine the sample size requirement or power, we need to specify:

  1. False positive (type I) and false negative (type II) error rates: What error rates are we willing to accept?

     • Conventionally, we pick a 5% type I error rate (two-sided) and a 10-20% type II error rate (80-90% statistical power)

     • All else being equal, the sample size requirement increases when we want higher power and a lower false positive rate

  2. Event rates / variability and effect size

     • We generally require a smaller sample size as the control event rate (CER) increases

       • The same holds when the effect size increases

  3. Dropout rates, etc.

Quick planning exercise: A 2-arm trial for mortality

When planning an RCT with a dichotomous endpoint, the sample size calculation requires pre-specification of:

| Parameters | Details |
|---|---|
| Statistical power & type I error rate | We generally need 80% power and a 5% type I error rate to stay competitive for grants |
| Control event rate | What % of control patients do we expect will die by day 28? |
| Desired (or expected) treatment effect | How effective do we want the treatment to be? |
| Dropout rate | How many patients will drop out of the study? For today, let’s assume this is 0 |

How do we determine the control event rate for our trial?

  • It is usually estimated from previous literature, such as previous trials on similar populations

\(~\)

  • Even if an estimate exists from a similar clinical trial, large uncertainty on CER still remains

    • Different sites, time periods, recruitment, etc.

    • We almost always recruit patients based on convenience sampling, rarely based on random sampling

    • Even if we could recruit a random sample of our target population, we know there are random variations

How do we determine the target effect size?

  • Ideally, we want to design our trial such that we have high power (e.g., 80-90%) to detect minimal clinically important difference (MCID)

    • MCID represents the smallest improvement that is considered worthwhile

\(~\)

  • We want the target effect size to be small, but this increases our sample size requirement

    • What do we do if it is not possible to recruit such a large sample size?

Sample size calculation exercise

  • Let’s assume control event rate of 0.3

  • Say we want 80% power and 5% type I error rate

  • A treatment that can reduce the relative risk of dying at day 28

    • The reduction in relative risk should be at least 0.25 to be worthwhile

Sample size calculation exercise - Continued

Sample size required for 80% power at a 5% type I error rate

| Control Event Rate | Relative Risk Reduction | Total N |
|---|---|---|
| 0.300 | 0.250 | 1,078 |
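The Total N in the table can be approximated with the standard normal-approximation formula for comparing two proportions. Below is a sketch using only the Python standard library; dedicated sample size software may differ slightly due to rounding or continuity corrections.

```python
# Approximate total sample size for a 2-arm trial with a binary endpoint,
# using the normal-approximation formula for two proportions.
from math import ceil
from statistics import NormalDist

def total_n(p_c, rrr, alpha=0.05, power=0.80):
    p_t = p_c * (1 - rrr)                      # treatment event rate under the RRR
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p_c + p_t) / 2                    # pooled event rate
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_c * (1 - p_c) + p_t * (1 - p_t)) ** 0.5) ** 2
    return ceil(2 * num / (p_c - p_t) ** 2)    # total across both arms

print(total_n(p_c=0.30, rrr=0.25))  # close to the 1,078 in the table
```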

Sample size calculation exercise - Continued

  • Assuming 0.3 CER, we need about 1078 patients to detect at least 0.25 RRR with 80% power at 5% type I error rate

\(~\)

  • But what if we were wrong about the event rate?

Misspecification of the event rate

  • Very common

  • We made the assumption of 0.3 for CER

  • But what if CER was 10% lower (0.27)?

    • or even 20% lower (0.24)?

Effects of misspecification of the event rate on sample size

Sample size required for 80% power at a 5% type I error rate

| Control Event Rate | Relative Risk Reduction | Total N |
|---|---|---|
| 0.300 | 0.250 | 1,078 |
| 0.270 | 0.250 | 1,241 |
| 0.240 | 0.250 | 1,444 |
  • When the CER is lower, we won’t have 80% power if our N is 1,078

    • If the CER was actually higher, we are now “over-powered” with the current target
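The power loss can also be computed directly. This is a sketch (normal approximation, standard library only) of the approximate power of the planned 539-per-arm trial when the true CER is lower than the assumed 0.30 but the target RRR stays at 0.25.

```python
# Approximate power of a fixed 539-per-arm trial under different true
# control event rates, keeping the target RRR at 0.25.
from math import sqrt
from statistics import NormalDist

def approx_power(p_c, rrr, n_per_arm, alpha=0.05):
    p_t = p_c * (1 - rrr)
    se = sqrt((p_c * (1 - p_c) + p_t * (1 - p_t)) / n_per_arm)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf((p_c - p_t) / se - z_a)

for cer in (0.30, 0.27, 0.24):
    print(f"CER {cer:.2f}: power ~ {approx_power(cer, 0.25, 539):.2f}")
```

With the planned N, power drops below 80% as the true CER falls, consistent with the table above.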

Main challenge with the conventional approach

  • There are many unknowns. It is extremely difficult to guess right, and at the very least uncomfortable to have to make these guesses

\(~\)

  • In conventional trials, we only get one guess

    • If you can predict the future, there is no problem with the conventional approach

\(~\)

  • But how do we plan clinical trials when we don’t know much about what we are studying? (e.g., COVID-19 at the start of 2020)

Anticipated regret

  • If a study just barely missed its objective but still had a clinically important effect, in retrospect, it would have likely succeeded if the sample size had been slightly larger

\(~\)

  • This might suggest that one should have planned for a flexible sample size design (an adaptive trial design) that can "react" to the accumulating trial data

    • Instead of having to wait until the trial is already finished

\(~\)

  • The ability to anticipate what one might regret, and to plan the trial design around those areas, can be effective in increasing the likelihood of study success

Adaptive trial designs

What are adaptive trial designs?

  • “Adaptive trial designs” is an umbrella term for a group of clinical trial designs that offer pre-planned opportunities to modify aspects of an ongoing trial based on accumulating trial data

\(~\)

  • The unifying property of adaptive trial designs:

    • Use of accumulating interim data based on pre-specified plans that are developed and outlined a priori

\(~\)

Difference between conventional and adaptive trial designs

Recall that in conventional trials, we do not use the interim data

  • Conventional designs: A fixed sample size and a single analysis at the end

\(~\)

  • Adaptive trial designs: An umbrella term for various designs where pre-planned opportunities to modify the trial design are permitted based on interim trial data

Comparison with conventional trial designs

  • In adaptive trials, we conduct one or more of the planned interim analyses according to the plan developed during the design stage

Motivation for Adaptive trial designs

The main motivation for adaptive trial designs is to learn from the data as they are collected during the trial and act accordingly

  • We can potentially reduce the expected sample size, trial duration, etc. (statistical efficiency)

  • We can potentially improve the ethics of the trial

Common types

Pre-specified adaptations include, but are not limited to:

  • Stopping the enrollment to a specific study arm (e.g. treatment or dose) or the trial early
  • Refining the sample size
  • Changing the allocation ratio of patients

Common types - continued.

Today, we will review the following:

  • Sequential designs

  • Sample size re-assessment

  • Response adaptive randomization

Sequential designs example

Sequential designs

This is the most common type of adaptive trial design

  • Refers to trial designs that allow you to stop enrollment early

\(~\)

  • You can decide to allow for early stopping based on:

    • Superiority: There is overwhelming evidence that the treatment works

    • Futility: There is underwhelming evidence for the treatment

\(~\)

  • You can allow for both superiority and futility, or one of them only

Motivation for sequential designs

Fail faster, succeed faster

  • In case of overwhelming evidence of efficacy, is completing the full trial necessary?

\(~\)

  • If the treatment is really underwhelming (ineffective), is completing the full trial necessary?
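This trade-off can be made concrete with a small Monte Carlo sketch of a two-look sequential design: one interim analysis for superiority at half the planned sample, then a final analysis. The boundary values (2.797 interim, 1.977 final) are illustrative O’Brien-Fleming-type numbers assumed for this example, not from the slides.

```python
# Monte Carlo sketch: 2-arm sequential trial with one interim look for
# superiority. Stopping early reduces the expected sample size.
import random
from math import sqrt

def z_stat(e_c, e_t, n):
    """Two-proportion z statistic (pooled); fewer treatment events = benefit."""
    p = (e_c + e_t) / (2 * n)
    se = sqrt(2 * p * (1 - p) / n)
    return ((e_c - e_t) / n) / se if se > 0 else 0.0

def simulate(p_c, p_t, n_per_arm, n_sims=2000, z_interim=2.797, z_final=1.977, seed=1):
    rng = random.Random(seed)
    early, wins, total_n = 0, 0, 0
    half = n_per_arm // 2
    for _ in range(n_sims):
        c = sum(rng.random() < p_c for _ in range(half))
        t = sum(rng.random() < p_t for _ in range(half))
        if z_stat(c, t, half) > z_interim:        # overwhelming interim evidence
            early += 1
            wins += 1
            total_n += 2 * half
        else:                                     # continue to full enrollment
            c += sum(rng.random() < p_c for _ in range(n_per_arm - half))
            t += sum(rng.random() < p_t for _ in range(n_per_arm - half))
            wins += z_stat(c, t, n_per_arm) > z_final
            total_n += 2 * n_per_arm
    return early / n_sims, wins / n_sims, total_n / n_sims

early, power, exp_n = simulate(p_c=0.30, p_t=0.225, n_per_arm=539)
print(f"Early stop: {early:.2f}, overall power: {power:.2f}, expected N: {exp_n:.0f}")
```

Under the alternative, a meaningful fraction of trials stop at the interim, so the expected total N falls below the fixed-design 1,078 while overall power stays near 80%.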

Sample size re-assessment example

Sample size re-assessment

  • Refers to designs where you can re-assess the sample size during the trial

    • This can be done blinded or unblinded

  • With blinded SSR, we do not look at the efficacy data at the level of the study arm, only the overall events, etc.

    • For a binary endpoint, this can involve looking at the overall number of events observed in the trial thus far

  • Unblinded SSR is commonly combined with sequential designs

    • With unblinded SSR, you do look at the data at the study arm level
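A sketch of how blinded SSR can work for a binary endpoint, under the assumption that the target RRR is held fixed: from the blinded pooled event rate, one can back out an updated CER estimate and recompute the sample size. The formula and numbers are illustrative, not from the slides.

```python
# Blinded SSR sketch: re-estimate the control event rate from the pooled
# (blinded) event rate, then recompute the required total sample size.
from math import ceil
from statistics import NormalDist

def n_two_proportions(p_c, rrr, alpha=0.05, power=0.80):
    """Approximate total N for a 2-arm trial (normal approximation)."""
    p_t = p_c * (1 - rrr)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_c + p_t) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_c * (1 - p_c) + p_t * (1 - p_t)) ** 0.5) ** 2
    return ceil(2 * num / (p_c - p_t) ** 2)

def blinded_ssr(pooled_event_rate, rrr):
    """Under 1:1 allocation, pooled rate = p_c * (2 - rrr) / 2, so we can
    solve for the control event rate without unblinding the arms."""
    p_c_hat = 2 * pooled_event_rate / (2 - rrr)
    return n_two_proportions(p_c_hat, rrr)

# Planned assuming CER 0.30, but the blinded pooled event rate looks like
# ~0.236 (consistent with a true CER near 0.27), so the target N grows:
print(blinded_ssr(pooled_event_rate=0.236, rrr=0.25))
```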

Response adaptive randomization example

Cautions with response adaptive randomization

  • Refers to designs where you preferentially adapt the allocation ratio during the trial toward the study arm(s) that are performing better

    • Theoretically very appealing
  • Challenging to design and implement from both statistical and operational point of view

  • For example, decreasing the allocation to the control arm comes with a trade-off of reduced power

    • In a 2-arm trial, 1:1 equal allocation has the highest power
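The power cost of moving away from 1:1 can be checked directly. This sketch (normal approximation, standard library only) holds the total N fixed and varies the fraction allocated to the treatment arm; the numbers reuse the earlier mortality example and are illustrative.

```python
# Approximate power of a 2-arm trial as a function of the fraction of
# patients allocated to the treatment arm, holding total N fixed.
from math import sqrt
from statistics import NormalDist

def power_at_allocation(p_c, p_t, n_total, frac_treatment, alpha=0.05):
    n_t = n_total * frac_treatment
    n_c = n_total * (1 - frac_treatment)
    se = sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(p_c - p_t) / se - z_a)

for frac in (0.5, 0.6, 0.7, 0.8):
    pw = power_at_allocation(0.30, 0.225, 1078, frac)
    print(f"{frac:.0%} allocated to treatment: power ~ {pw:.3f}")
```

Power declines as the allocation moves away from (approximately) equal, which is the trade-off that response adaptive randomization has to manage.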

Adaptive trial designs: Planning and conduct

Trial planning

  • Whether we are planning a clinical trial with conventional trial design (fixed sample size) or adaptive trial designs, we need to ensure the trial is scientifically robust, ethical, and compliant with regulatory standards

  • To ensure the validity and integrity of our trial, we must plan for both statistical and operational aspects

Adaptive trial designs planning

  • Adaptive trial designs are NOT a tool to avoid good planning. Planning an adaptive trial design is much more comprehensive than planning a conventional trial design.

  • We go beyond sample size or power calculations and use what’s called statistical simulation, which can help minimize anticipated regret by comparing multiple design options for their pros and cons under various scenarios

  • By simulations, we mean the process of generating random data with known properties

    • Operating characteristics represent the average behavior of a design and can be calculated analytically or using simulations
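As a minimal illustration of what such simulation means in practice, the sketch below generates many trials with known properties and estimates two operating characteristics of a simple fixed design: the type I error rate (simulating under no effect) and power (simulating under the assumed effect). All numbers are illustrative.

```python
# Estimating operating characteristics by simulation: generate trials with
# known event rates and count how often the design declares a difference.
import random
from math import sqrt
from statistics import NormalDist

Z_CRIT = NormalDist().inv_cdf(0.975)  # two-sided 5% type I error rate

def trial_rejects(p_c, p_t, n_per_arm, rng):
    e_c = sum(rng.random() < p_c for _ in range(n_per_arm))
    e_t = sum(rng.random() < p_t for _ in range(n_per_arm))
    p = (e_c + e_t) / (2 * n_per_arm)
    se = sqrt(2 * p * (1 - p) / n_per_arm)
    return se > 0 and abs((e_c - e_t) / n_per_arm) / se > Z_CRIT

def rejection_rate(p_c, p_t, n_per_arm, n_sims=2000, seed=7):
    rng = random.Random(seed)
    return sum(trial_rejects(p_c, p_t, n_per_arm, rng) for _ in range(n_sims)) / n_sims

type1 = rejection_rate(0.30, 0.30, 539)   # null scenario: no true effect
power = rejection_rate(0.30, 0.225, 539)  # alternative: RRR of 0.25
print(f"Estimated type I error: {type1:.3f}, estimated power: {power:.3f}")
```

The same machinery extends to adaptive designs: add the interim decision rules to the per-trial logic and re-estimate the error rates under each scenario of interest.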

General planning process for adaptive trial designs

Decisions for adaptations

  • Adaptations are to be planned at the design stage, before the trial begins

  • Adaptations are made based on accumulating (interim) trial data

  • There are often dedicated unblinded statisticians conducting the interim analyses. They will prepare the interim report that will be reviewed by an independent Data Monitoring Committee (DMC)

General steps to adaptive trial conduct

  1. Start the trial with a robust protocol that contains:

    • Pre-specified adaptations and decision rules

    • Pre-specified interim analysis plans (e.g. the number of interim analyses and when to look at interim data)

  2. Enroll patients and collect enough data to allow for reasonable decision-making before the first interim analysis. This period is called the burn-in period

  3. If the decision rules are met, make the adaptation(s). If not, continue the trial without adaptation(s)

General steps to adaptive trial conduct - continued

Requirements for pre-specified adaptations

  • Our plans on interim analyses include specifications of:

    • When will the first interim analysis occur?

      • The period before the first interim analysis, in which we allow data to accumulate (the burn-in period)
    • How many interim analyses will we conduct? And how frequently?

    • What adaptations will be allowed? What are the decision criteria?

Requirements for pre-specified adaptations - continued.

  • In addition to these statistical rules, we specify plans to prevent operational biases

    • Who will conduct the analyses? Who will be blinded and who will not be?
  • We need to ensure a firewall between the unblinded statisticians and the Data Monitoring Committee (DMC) on one side, and the Trial Steering Committee and the sponsor on the other

Adaptive trial design execution

  • After the burn-in, we start to conduct one or more interim analyses

    • If the criteria for adaptation(s) are met, we adapt.

      • Otherwise, we continue on to the next analysis
  • Adaptive designs do not necessarily impose adaptations, as adaptations may not be needed during the trial if the data do not call for them

  • We only adapt if the pre-specified decision criteria are met

Existing standards, reporting guidelines, and practical considerations

Standards of adaptive trial designs

Prospective specification of adaptations and analysis

  • We need a statistical analysis plan with pre-specified models for interim monitoring and the final analysis, along with priors and assumptions

Evaluation of statistical (operating) characteristics

  • We need to characterize the statistical (operating) characteristics such as type I error rate, power, and expected sample size

    • We often need to submit our adaptive trial designs plans to the FDA or other regulatory agencies
  • It is beneficial to use simulations to evaluate characteristics beyond the statistical ones mentioned above. For instance, simulations can sometimes be used for resource planning in terms of drug supply, etc.

Standards of adaptive trial designs - continued.

Communication and vetting of trial design with key stakeholders

  • Key stakeholders may include clinical investigators, institutional review boards, data safety monitoring boards, patient representatives and groups, potential participants, funders, and regulatory agencies

  • Important that key stakeholders understand proposed design

Standards of adaptive trial designs - continued.

Ensure clinical trial infrastructure is adequate to support planned adaptation(s)

  • Demonstrate that the infrastructure can allow for timely electronic data capture, medical monitoring, and data transfers, and is able to implement adaptations in a timely manner

Standards of adaptive trial designs - continued.

Consider sources of operational bias and implement ways to minimize

  • Operational bias occurs when information about the ongoing trial causes changes to the participant pool, investigator behavior, or other aspects that affect the conduct of the trial

  • Ensure proper oversight of the trial by the Data Monitoring Committee (DMC)

Proactivity of data monitoring committee

The FDA Adaptive Design Guidance:

  • “Because the DMC is unblinded to interim study results it can help implement the adaptation decision according to the prospective adaptation algorithm, but it should not be in a position to otherwise change the study design except for serious safety-related concerns that are the usual responsibility of a DMC”

Standards of adaptive trial designs - continued.

The reporting of adaptive trials should be consistent with the CONSORT statement

  • In 2020, Dimairo and colleagues published their CONSORT extension to adaptive designs

Reporting guidelines

  • The Adaptive designs CONSORT Extension (ACE) statement highlights the importance of pre-specifying:

    • The primary and secondary outcomes, as well as which outcome is being used to make the decision to adapt (Item 6a)

    • Plans for the burn-in, interim analyses, and the trial’s decision rules (Item 7b)

    • Simulations under plausible scenarios to investigate operating characteristics (Item 7a), and reporting on the impact on estimation bias (Item 12b)

    • Making unplanned changes to the outcomes transparent (Item 6b), etc.

Several good examples of how to report adaptive clinical trials can be found in this manuscript

Practical considerations

  • Planning adaptive trial designs requires resources and time

    • Best to plan ahead with key stakeholders with statistical, content, and operational expertise to make the trial possible
  • Customized education and training plans will likely be required for the vendors, investigators, and other personnel involved in the trial

    • Critical thinking is needed from the personnel involved to create flexible technology systems and procedures required to execute these clinical trials
  • During the conduct, it is important to document what happened, maintain a proper firewall, and manage external communications effectively

Practical considerations - continued.

  • Regardless of the trial design, the successful conduct of any clinical trial requires cross-functional coordination and cooperation

  • Successful implementation of complex trial designs likely requires much more coordination and cooperation

  • Having a strong, diverse, and collaborative team is an asset

  • It is important to honestly ask, and be upfront about, whether we have the relevant expertise available to design and execute the trial

Summary

Final comments

  • Adaptive trial designs are an important tool for clinical trial research. These data-driven approaches can result in important statistical efficiencies.

  • No designs should be chosen by default. The decision to choose conventional design versus adaptive trial design requires careful considerations.

  • In my experience, this planning process with simulations and collaboration does make the overall trial better, regardless of the final design you end up choosing.

  • Adaptive trial designs can be more challenging and complex to plan and execute than conventional trials. They generally take more time and effort to plan, but the effort is generally well worth it

References

  1. Dimairo M, Pallmann P, Wason J, Todd S, Jaki T, Julious SA, Mander AP, Weir CJ, Koenig F, Walton MK, Nicholl JP. The Adaptive designs CONSORT Extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ. 2020 Jun 17;369.

  2. Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, Holmes J, Mander AP, Odondi LO, Sydes MR, Villar SS. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Medicine. 2018 Dec;16(1):1-5.

  3. Thorlund K, Haggstrom J, Park JJ, Mills EJ. Key design considerations for adaptive clinical trials: a primer for clinicians. BMJ. 2018 Mar 8;360.

  4. A Practical Adaptive & Novel Designs and Analysis (PANDAS) toolkit.

  5. Detry MA, Lewis RJ, Broglio KR, Connor JT, Berry SM, Berry DA. Standards for the design, conduct, and evaluation of adaptive randomized clinical trials. Patient-Centered Outcomes Research Institute (PCORI) Guidance Report. 2012 Mar.

Book

Our book on adaptive trial designs and master protocols